ONNXModelHub
Parameters
cacheDirectory
The directory where all loaded models are stored. It must exist before any model is loaded and must have the file read/write permissions required by your OS.
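For example, the cache directory can be created up front and passed to the hub. This is a minimal sketch; the path is illustrative, and the import path for ONNXModelHub may differ between KotlinDL versions:

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.inference.loaders.ONNXModelHub

// Illustrative location; choose any writable directory on your system.
val cacheDir = File("cache/pretrainedModels")
cacheDir.mkdirs() // ensure the directory exists before loading models

val modelHub = ONNXModelHub(cacheDirectory = cacheDir)
```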
Constructors
ONNXModelHub
Functions
get
operator fun <T : InferenceModel, U : InferenceModel> get(modelType: ModelType<T, U>): U
loadModel
open override fun <T : InferenceModel, U : InferenceModel> loadModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): T
Loads model configuration without weights.
fun loadModel(modelType: OnnxModelType<*>, vararg executionProviders: ExecutionProvider, loadingMode: LoadingMode = LoadingMode.SKIP_LOADING_IF_EXISTS): OnnxInferenceModel
This method loads a model from the ONNX model zoo corresponding to the specified modelType. The loadingMode parameter defines how an already-downloaded model is handled: with LoadingMode.SKIP_LOADING_IF_EXISTS, a previously downloaded model is reused from the local cacheDirectory; with LoadingMode.OVERRIDE_IF_EXISTS, the model is downloaded again even if a cached copy exists, and the cached copy is overwritten. executionProviders lists the execution providers that will be used for model inference.
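A typical call might look like the following sketch. The concrete model type (ONNXModels.CV.ResNet50) and the CPU execution provider are illustrative choices, and import paths may differ between KotlinDL versions:

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.inference.loaders.ONNXModelHub
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModels
import org.jetbrains.kotlinx.dl.api.inference.loaders.LoadingMode

val modelHub = ONNXModelHub(cacheDirectory = File("cache/pretrainedModels"))

// Downloads the model on the first call; subsequent calls reuse the
// cached copy because of SKIP_LOADING_IF_EXISTS. Inference runs on the
// CPU execution provider.
val model = modelHub.loadModel(
    ONNXModels.CV.ResNet50(),          // example OnnxModelType
    ExecutionProvider.CPU(),           // execution provider(s) for inference
    loadingMode = LoadingMode.SKIP_LOADING_IF_EXISTS
)
```

Passing LoadingMode.OVERRIDE_IF_EXISTS instead would force a fresh download, replacing the copy in cacheDirectory.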
loadPretrainedModel
fun <T : InferenceModel, U : InferenceModel> loadPretrainedModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): U
Properties
cacheDirectory